68 research outputs found

    Simulating and analyzing commercial workloads and computer systems


    Alignment of the CMS tracker with LHC and cosmic ray data

    © CERN 2014 for the benefit of the CMS collaboration, published under the terms of the Creative Commons Attribution 3.0 License by IOP Publishing Ltd and Sissa Medialab srl. Any further distribution of this work must maintain attribution to the author(s) and the published article's title, journal citation and DOI.
    The central component of the CMS detector is the largest silicon tracker ever built. The precise alignment of this complex device is a formidable challenge, and only achievable with a significant extension of the technologies routinely used for tracking detectors in the past. This article describes the full-scale alignment procedure as it is used during LHC operations. Among the specific features of the method are the simultaneous determination of up to 200 000 alignment parameters with tracks, the measurement of individual sensor curvature parameters, the control of systematic misalignment effects, and the implementation of the whole procedure in a multi-processor environment for high execution speed. Overall, the achieved statistical accuracy on the module alignment is found to be significantly better than 10 μm.
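
    The simultaneous fit of a very large number of alignment parameters is, at its core, a global linear least-squares problem built from track-hit residuals. The Python sketch below illustrates only that general idea on a toy scale; the parameter count, the indicator-style design matrix, and the synthetic data are assumptions made for illustration, and the sketch ignores the simultaneous treatment and elimination of track parameters that the full procedure requires.

    # Toy sketch of track-based alignment as a global linear least-squares fit.
    # This is NOT the CMS procedure; module count, data, and resolution are illustrative.
    import numpy as np

    rng = np.random.default_rng(0)

    n_modules = 50        # alignment parameters: one offset per module (toy scale)
    n_hits = 5000         # hit residual measurements collected from many tracks
    sigma = 0.01          # assumed hit resolution (arbitrary units)

    true_offsets = rng.normal(0.0, 0.05, n_modules)    # unknown misalignments

    # Each hit measures the offset of one module, smeared by the hit resolution.
    module_of_hit = rng.integers(0, n_modules, n_hits)
    residuals = true_offsets[module_of_hit] + rng.normal(0.0, sigma, n_hits)

    # Design matrix: d(residual)/d(alignment parameter); here a simple indicator.
    A = np.zeros((n_hits, n_modules))
    A[np.arange(n_hits), module_of_hit] = 1.0

    # Global chi^2 minimization: solve the normal equations (A^T A) p = A^T r.
    estimated_offsets = np.linalg.solve(A.T @ A, A.T @ residuals)

    rms_error = np.sqrt(np.mean((estimated_offsets - true_offsets) ** 2))
    print(f"RMS error on the recovered offsets: {rms_error:.4f}")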

    Discussing environmental education in everyday school life: developing projects at school and initial and continuing teacher education

    This study discusses how Environmental Education (EE) is being addressed in primary education at a state school in the municipality of Tangará da Serra/MT, Brazil, and how the school's teachers understand EE and incorporate it into everyday school life. To this end, interviews were conducted with the teachers taking part in an interdisciplinary EE project at the school under study. The school's project was found not to be meeting its stated objectives because of the teachers' unfamiliarity with the project, inadequate teacher training, failure to understand EE as a teaching-learning process, a lack of teaching resources, and poor planning of the activities. Based on these findings, the study argues that the topic cannot be addressed outside interdisciplinary work and, above all, stresses the importance of a deeper study of EE that links theory and practice, both in teacher education and in school projects, in order to move beyond the traditional association of "EE with ecology, waste, and vegetable gardens".

    Automated full-system power characterization

    A new framework automatically generates full-system multicore powermarks, i.e., synthetic programs with desired power characteristics, on multicore server platforms. The framework constructs full-system power models with error bounds on the power estimates and guides the design of energy-efficient and cost-efficient server and data center infrastructures.
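
    As a rough illustration of what an empirical full-system power model with error bounds can look like, the Python sketch below fits a linear model from utilization metrics to measured wall power and derives a 95th-percentile error bound from the fit residuals. The metric names, coefficients, and synthetic data are assumptions made for illustration, not the framework described above.

    # Sketch: fit an empirical full-system power model with a simple error bound.
    # Metrics and synthetic data are illustrative assumptions, not the framework above.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 200
    cpu_util = rng.uniform(0.0, 1.0, n)      # fraction of CPU busy
    mem_bw = rng.uniform(0.0, 1.0, n)        # normalized memory traffic
    disk_util = rng.uniform(0.0, 1.0, n)     # normalized disk activity

    # Synthetic "measured" wall power (watts): idle power plus per-component terms.
    power = (120.0 + 95.0 * cpu_util + 30.0 * mem_bw + 15.0 * disk_util
             + rng.normal(0.0, 3.0, n))

    # Least-squares fit of power = c0 + c1*cpu + c2*mem + c3*disk.
    X = np.column_stack([np.ones(n), cpu_util, mem_bw, disk_util])
    coeffs, *_ = np.linalg.lstsq(X, power, rcond=None)

    residuals = power - X @ coeffs
    error_bound = np.percentile(np.abs(residuals), 95)   # 95th-percentile abs. error

    print("fitted coefficients (idle, cpu, mem, disk):", np.round(coeffs, 1))
    print(f"95% error bound on the power estimates: +/- {error_bound:.1f} W")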

    Studying hardware and software trade-offs for a real-life web 2.0 workload

    Designing data centers for Web 2.0 social networking applications is a major challenge because of the large number of users, the large scale of the data centers, the distributed application base, and the cost sensitivity of a data center facility. Optimizing the data center for performance per dollar is far from trivial. In this paper, we present a case study characterizing and evaluating hardware/software design choices for a real-life Web 2.0 workload. We sample the Web 2.0 workload both in space and in time to obtain a reduced workload that can be replayed, driven by input data captured from a real data center. The reduced workload captures the important services (and their interactions) and allows for evaluating how hardware choices affect end-user experience (as measured by response times). We consider the workload of Netlog, a popular and commercially deployed social networking site with a large user base, and we explore hardware trade-offs in terms of core count, clock frequency, traditional hard disks versus solid-state disks, etc., for the different servers, obtaining several interesting insights. Further, we present two use cases illustrating how our characterization method can be used for guiding hardware purchasing decisions as well as software optimizations.
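
    The Python sketch below illustrates, under simplified assumptions, what sampling a request trace in space (a subset of users) and in time (a subset of intervals) can look like; the field names, services, and sampling rates are hypothetical and not Netlog's actual setup.

    # Sketch: sample a request trace in space (users) and time (intervals) to
    # obtain a reduced, replayable workload. All names and rates are hypothetical.
    import random

    random.seed(2)

    # Toy request log: (timestamp_seconds, user_id, service)
    services = ["web", "memcached", "mysql", "search"]
    log = [(t, random.randrange(10_000), random.choice(services))
           for t in range(3600)]            # one hour, one request per second

    def keep_user(user_id, rate=0.05):
        """Spatial sampling: keep a deterministic 5% subset of users."""
        return (user_id % 100) < rate * 100

    def keep_time(ts, window=60, period=600):
        """Temporal sampling: keep one 60-second window out of every 600 seconds."""
        return (ts % period) < window

    reduced = [r for r in log if keep_user(r[1]) and keep_time(r[0])]
    print(f"original requests: {len(log)}, reduced trace: {len(reduced)}")

    # The reduced trace can then be replayed against a test cluster to measure
    # end-user response times under different hardware configurations.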

    VSim: Simulating Multi-Server Setups at Near Native Hardware Speed

    Simulating contemporary computer systems is a challenging endeavor, especially when it comes to simulating high-end setups involving multiple servers. The simulation environment needs to run complete software stacks, including operating systems, middleware, and application software, and it needs to simulate network and disk activity next to CPU performance. In addition, it needs the ability to scale out to a large number of server nodes while attaining good accuracy and reasonable simulation speeds. This paper presents VSim, a novel simulation methodology for multi-server systems. VSim leverages virtualization technology for simulating a target system on a host system. VSim controls CPU, network and disk performance on the host, and it gives the software stack the illusion of running on the target system through time dilation. VSim can simulate multiple targets per host, and it employs a distributed simulation scheme across multiple hosts for simulations at scale. Our experimental results demonstrate VSim's accuracy: typical errors are below 6% for CPU, disk, and network performance. Real-life workloads involving the Lucene search engine and the Olio Web 2.0 benchmark illustrate VSim's utility and accuracy (average error of 3.2%). Our current setup can simulate up to five target servers per host, and we provide a Hadoop workload case study in which we simulate 25 servers. These simulation results are obtained at a simulation slowdown of one order of magnitude compared to native hardware execution.
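
    The Python sketch below illustrates the basic arithmetic behind time dilation: if the guest-visible clock is slowed down by a dilation factor, the unchanged host resources appear proportionally faster to the guest, so they can be throttled down to match a chosen target. The factor and the numbers are illustrative, not VSim's actual resource model.

    # Sketch of the time-dilation idea: slow the guest-visible clock so host
    # resources appear faster, then throttle them to match the simulated target.
    # The dilation factor and numbers are illustrative, not VSim's model.

    def dilated_guest_time(host_elapsed_s, dilation):
        """Guest-perceived elapsed time when the guest clock runs 'dilation' times slower."""
        return host_elapsed_s / dilation

    def apparent_bandwidth(host_bandwidth_mbps, dilation):
        """Bandwidth the guest perceives if the host link is left unthrottled."""
        return host_bandwidth_mbps * dilation

    dilation = 10.0        # guest clock runs 10x slower than the host clock
    host_elapsed = 30.0    # seconds of real (host) time
    host_net = 1000.0      # Mb/s available on the host NIC

    print(f"guest sees {dilated_guest_time(host_elapsed, dilation):.1f} s elapse")
    print(f"guest perceives the network as {apparent_bandwidth(host_net, dilation):.0f} Mb/s")
    # To emulate a 1000 Mb/s target NIC under 10x dilation, the host link would
    # therefore be throttled to 100 Mb/s.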

    Trends in server energy proportionality

    This perspective analyzes trends in the energy proportionality of contemporary servers. Using power/performance numbers from a broad set of commercial machines, we analyze how energy proportionality has evolved over the past three years. We evaluate to what extent SPECpower quantifies energy proportionality, and we study how much total energy can be saved by making servers even more energy-proportional.
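
    One simple way to turn SPECpower-style load/power measurements into a proportionality score is to compare the area under the normalized power-versus-load curve with that of an ideally proportional server. The Python sketch below does this with hypothetical numbers; both the data points and this particular score are assumptions made for illustration, not necessarily the metric used in the article.

    # Sketch: score energy proportionality from load/power measurements.
    # Data points and the area-based score are illustrative assumptions.
    import numpy as np

    load = np.linspace(0.0, 1.0, 11)                        # 0%, 10%, ..., 100% of peak load
    power = np.array([180, 200, 215, 230, 245, 258, 270,    # hypothetical wall power (W)
                      282, 294, 307, 320], dtype=float)

    p_norm = power / power[-1]            # normalize to the power at 100% load

    # Area under the normalized power curve (trapezoidal rule); an ideally
    # proportional server (power scales linearly with load) has area 0.5.
    area_server = np.sum((p_norm[1:] + p_norm[:-1]) / 2.0 * np.diff(load))
    area_ideal = 0.5

    # 1.0 for a perfectly proportional server, lower as the curve bulges above
    # the ideal line (e.g. because of high idle power).
    ep_score = 1.0 - (area_server - area_ideal) / area_ideal

    print(f"idle/peak power ratio: {power[0] / power[-1]:.2f}")
    print(f"energy-proportionality score: {ep_score:.2f}")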

    Fast, accurate and validated full-system software simulation of x86 hardware

    This article presents a fast and accurate interval-based CPU timing model that is easily implemented and integrated into the COTSon full-system simulation infrastructure. Validation against real x86 hardware demonstrates the timing model's accuracy. The end result is a software simulator that faithfully simulates x86 hardware at a speed in the tens-of-MIPS range.
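
    The Python sketch below illustrates the basic idea of interval analysis: the dynamic instruction stream is divided into intervals separated by miss events, each interval executes at the core's dispatch width, and each miss event adds a penalty. The penalties and the toy event trace are illustrative assumptions, not the validated model integrated into COTSon.

    # Sketch of an interval-based CPU timing model. Penalties and the toy trace
    # are illustrative assumptions, not the validated model from the article.

    DISPATCH_WIDTH = 4          # instructions per cycle in the absence of miss events
    PENALTY_CYCLES = {          # assumed per-event penalties (cycles)
        "branch_mispredict": 15,
        "l2_miss": 40,
        "dram_access": 200,
    }

    def estimate_cycles(intervals):
        """intervals: list of (instructions_in_interval, terminating_miss_event)."""
        cycles = 0.0
        instructions = 0
        for n_insns, event in intervals:
            cycles += n_insns / DISPATCH_WIDTH          # steady-state execution
            cycles += PENALTY_CYCLES.get(event, 0)      # miss-event penalty
            instructions += n_insns
        return cycles, instructions

    trace = [(400, "l2_miss"), (120, "branch_mispredict"),
             (900, "dram_access"), (600, None)]
    cycles, insns = estimate_cycles(trace)
    print(f"estimated IPC: {insns / cycles:.2f}")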

    Optimizing the datacenter for data-centric workloads

    The amount of data produced on the internet is growing rapidly. Along with this data explosion comes a trend towards more and more diverse data, including rich media such as audio and video. Data explosion and diversity lead to the emergence of data-centric workloads that manipulate, manage and analyze the vast amounts of data. These data-centric workloads are likely to run in the background and include application domains such as data mining, indexing, compression, encryption, audio/video manipulation, data warehousing, etc. Given that datacenters are very much cost sensitive, reducing the cost of a single component by a small fraction immediately translates into huge cost savings because of the large scale. Hence, when designing a datacenter, it is important to understand data-centric workloads and optimize the ensemble for these workloads so that the best possible performance per dollar is achieved. This paper studies how the emerging class of data-centric workloads affects design decisions in the datacenter. Through architectural simulation of minutes of run time on a validated full-system x86 simulator, we derive the insight that for some data-centric workloads a high-end server optimizes performance per total cost of ownership (TCO), whereas for other workloads a low-end server is the winner. This observation suggests heterogeneity in the datacenter, in which each job is run on the most cost-efficient server. Our experimental results show that a heterogeneous datacenter achieves up to 88%, 24% and 17% improvements in cost-efficiency over homogeneous high-end, commodity and low-end server datacenters, respectively.
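
    The Python sketch below illustrates the kind of performance-per-TCO comparison that motivates such a heterogeneous datacenter: for each workload, pick the server type with the highest throughput per dollar of TCO and compare it against a homogeneous high-end baseline. The server types, throughput figures, and TCO numbers are illustrative assumptions, not the paper's measurements.

    # Sketch: choose the most cost-efficient server type per workload.
    # Server types, throughputs, and TCO figures are illustrative assumptions.

    # Annualized TCO per server (dollars) and per-workload throughput (jobs/hour).
    servers = {
        "high_end":  {"tco": 6000, "throughput": {"indexing": 400, "compression": 180}},
        "commodity": {"tco": 3000, "throughput": {"indexing": 170, "compression": 110}},
        "low_end":   {"tco": 1200, "throughput": {"indexing": 50,  "compression": 75}},
    }

    def cost_efficiency(server, workload):
        """Throughput per dollar of TCO for one workload on one server type."""
        return servers[server]["throughput"][workload] / servers[server]["tco"]

    for workload in ["indexing", "compression"]:
        best = max(servers, key=lambda s: cost_efficiency(s, workload))
        heterogeneous = cost_efficiency(best, workload)
        homogeneous = cost_efficiency("high_end", workload)   # high-end-only baseline
        print(f"{workload}: best = {best}, "
              f"{heterogeneous * 1000:.1f} vs {homogeneous * 1000:.1f} "
              f"jobs/hour per $1000 of TCO (heterogeneous vs homogeneous high-end)")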